The “Psychological” Limits of Neural Computation

Author

  • RAZVAN ANDONIE
Abstract

Recent results have essentially changed our view of the generality of neural network models. We now know that such models i) are more powerful than Turing machines if they have an infinite number of neurons, ii) are universal approximators, iii) can represent any logical function, and iv) can efficiently solve instances of NP-complete problems. In a previous paper [1], we discussed the computational capabilities of artificial neural networks vis-à-vis the assumptions of classical computability. We continue that discussion here, concentrating on the worst-case “psychological” limits of neural computation. Along the way, we state some open problems and conjectures concerning the representation of logical functions and circuits by neural networks.

In: Dealing with Complexity: A Neural Network Approach, M. Karny, K. Warwick, V. Kurkova (eds.), Springer-Verlag, London, 1997, 252-263.

1. Neural networks and Turing machines

It is known that every algorithm is Turing solvable. In the context of function computability, the Church-Turing thesis states that every intuitively computable function is Turing computable. The languages accepted by Turing machines form the recursively enumerable language family L0 and, according to the Church-Turing thesis, L0 is also the class of algorithmically computable sets. In spite of its generality, the Turing model cannot solve every problem. Recall, for example, that the halting problem is Turing unsolvable: it is algorithmically undecidable whether an arbitrary Turing machine will eventually halt when given some specified, but arbitrary, input.

McCulloch and Pitts [28] asserted that neural networks are computationally universal. A neural network implementation of a Turing machine was provided by Franklin and Garzon [13, 14]. The consequence, of course, is that any algorithmic problem that is Turing solvable can be encoded as a problem solvable by a neural network: neural networks are at least as powerful as Turing machines. The converse has been widely presumed true [18], since computability has become synonymous with Turing computability. Nonetheless, Franklin and Garzon [13, 14] proved that the halting problem, when suitably encoded on a neural network, is solvable by an infinite neural network. Despite its appearance, this result is not a counterexample to the Church-Turing thesis, since the thesis concerns only algorithmic problems solvable by finitary means. Intuitively, it is no surprise that a device with an infinite number of processing units is more powerful than a Turing machine: the infinite time complexity of the halting problem on a Turing machine has been transferred into the infinite number of neurons. This property has a consequence of fundamental importance: infinite neural networks are strictly more powerful than Turing machines.

2. Function approximation

The approximation (or prediction) of functions that are known only at a certain number of discrete points is a classical application of multilayered neural networks. Of fundamental importance was the discovery [20] that a classical mathematical result of Kolmogorov (1957) was actually a statement that for any continuous mapping f there must exist a three-layered feedforward neural network of continuous-type neurons (an input layer with n neurons, a hidden layer with 2n+1 neurons, and an output layer with m neurons) that implements f exactly. This existence result was the first step.
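For reference, a sketch of the Kolmogorov superposition theorem behind this interpretation, in the scalar-output case (for m outputs the outer functions are applied componentwise); the notation Φ_q, ψ_pq is ours, not the paper's:

```latex
% Kolmogorov's superposition theorem (1957), as interpreted in [20]:
% every continuous f on the n-dimensional unit cube decomposes into a
% "three-layer" form with n inputs and 2n+1 inner sums.
\[
  f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \psi_{pq}(x_p) \right),
\]
% where the inner functions \psi_{pq} are continuous and independent of f,
% and only the outer functions \Phi_q depend on f.
```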
Cybenko [10] showed that any continuous function f : [0,1]^n ⊂ R^n → R^m can be approximated to any desired degree of accuracy by a feedforward neural network with one hidden layer using sigmoidal nonlinearities. Many other papers have investigated the approximation capability of three-layered networks in various ways (see [37]). More recently, Chen et al. [8] pointed out that the boundedness of the sigmoidal function plays the essential role in its serving as an activation function in the hidden layer; i.e., instead of continuity or monotonicity, it is the boundedness of sigmoidal functions that ensures the network's ability to approximate functions defined on compact sets in R^n. In addition to sigmoid functions, many others can be used as activation functions of universal-approximator feedforward networks [9]. Girosi and Poggio [17] proved that radial basis function networks also have the universal approximation property.

Consequently, a feedforward neural network with a single hidden layer has sufficient flexibility to approximate, within any given error, any continuous function defined on a compact set (these conditions may be relaxed); a numerical sketch of this single-hidden-layer setting is given at the end of this excerpt. However, there are at least three important limitations:

a) These existence proofs are rather formal and do not guarantee the existence of a reasonable representation in practice. In contrast, a constructive proof was given [27] for networks with two hidden layers. An explicit numerical implementation of the neural representation of continuous functions was recently discovered by Sprecher [35, 36].

b) The arbitrary accuracy with which such networks are able to approximate quite general functions rests on the assumption that arbitrarily large parameters (weights and biases) and enough hidden units are available. In practical situations, however, both the size of the parameters and the number of hidden neurons are bounded. The problem of how the universal approximation property can be achieved while constraining the size of the parameters and the number of hidden units was recently examined by Kurková [26].

c) These “universal approximation” proofs are commonly used to justify the notion that neural networks can “do anything” (in the domain of function approximation). What these proofs do not consider is that networks are simulated on computers with finite accuracy. Wray and Green [38] showed that approximation-theory results cannot be applied blindly, without regard for numerical accuracy limits, and that these limits constrain the approximation ability of neural networks.

The relationship between networks with one hidden layer and networks with several hidden layers is not yet well understood. Although one hidden layer is always enough, in solving particular problems it is often essential to have more hidden layers: for many problems an approximation with one hidden layer would require an impractically large number of hidden neurons, whereas an adequate solution can be obtained with a tractable network size by using more than one hidden layer.

3. Representation of logical functions using neural networks

A finite mapping is defined as a mapping from a finite subset of the Euclidean space
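As a concrete illustration of the single-hidden-layer result of Section 2, here is a minimal numerical sketch (ours, not from the paper): a sigmoidal network g(x) = Σ_j c_j σ(w_j x + b_j) fitted by plain gradient descent to a continuous target on [0, 1]. The width n_hidden, the learning rate, the training schedule, and the target function are all illustrative assumptions.

```python
# A minimal sketch of Cybenko-style approximation: one hidden sigmoidal
# layer fitted by gradient descent to a continuous target on [0, 1].
# Width, learning rate, and target are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def target(x):
    # Any continuous function on [0, 1] would do here.
    return np.sin(2 * np.pi * x)

n_hidden = 20                                    # hidden-layer width
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)    # sample grid in [0, 1]
y = target(x)

# Parameters of  g(x) = sum_j c_j * sigmoid(w_j * x + b_j)
W = rng.normal(scale=5.0, size=(1, n_hidden))
b = rng.normal(scale=5.0, size=(1, n_hidden))
c = rng.normal(scale=0.1, size=(n_hidden, 1))

lr = 0.05
for step in range(20000):
    h = sigmoid(x @ W + b)       # hidden activations, shape (200, n_hidden)
    pred = h @ c                 # network output g(x)
    err = pred - y               # residual against the target
    # Gradients of the mean squared error, chain rule by hand.
    grad_c = h.T @ err / len(x)
    dh = (err @ c.T) * h * (1.0 - h)
    grad_W = x.T @ dh / len(x)
    grad_b = dh.mean(axis=0, keepdims=True)
    c -= lr * grad_c
    W -= lr * grad_W
    b -= lr * grad_b

print("max |f - g| on the grid:", np.max(np.abs(pred - y)))
```

Increasing n_hidden and training longer drives the grid error down, which is the qualitative content of the universal approximation results above; the finite-precision caveat of limitation c) still applies, since the whole computation runs in floating point.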



